📝 Walkthrough

This PR adds screen-share audio mixing: Android fetches and mixes screen PCM into the mic buffer and exposes media-projection permission data; iOS adds in-app ReplayKit-based screen capture, audio conversion/ring-buffer/mixer utilities, and ties in-app capture into the existing track creation and audio pipeline.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant RPS as ReplayKit<br/>RPScreenRecorder
    participant InApp as InAppScreenCapturer
    participant Converter as ScreenShareAudioConverter
    participant Ring as AudioRingBuffer
    participant Mixer as ScreenShareAudioMixer
    participant WebRTC as WebRTC Audio<br/>Processing
    RPS->>InApp: deliver video & audioApp samples
    InApp->>Converter: audioApp CMSampleBuffer
    Converter->>Ring: write converted PCM (float frames)
    WebRTC->>Mixer: audioProcessingProcess(rtcAudioBuffer)
    Mixer->>Ring: read(availableFrames)
    Mixer->>Mixer: mix (additive, clamp) into rtcAudioBuffer
    Mixer->>WebRTC: return mixed audioBuffer
```
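The converter→ring→mixer hand-off in the diagram assumes a single-producer/single-consumer PCM ring buffer. A minimal illustrative sketch of those write/read semantics (in Java for neutrality; the overwrite-oldest-on-overflow policy and the names here are assumptions, not the actual `AudioRingBuffer.swift` implementation):

```java
// Minimal single-producer/single-consumer PCM ring buffer sketch.
// Overflow policy (drop oldest frames) is an assumption for illustration.
class AudioRingBuffer {
    private final float[] buffer;
    private int head = 0;   // next write index
    private int tail = 0;   // next read index
    private int count = 0;  // frames currently stored

    AudioRingBuffer(int capacityFrames) {
        buffer = new float[capacityFrames];
    }

    /** Writes frames, overwriting the oldest data when full. */
    synchronized void write(float[] frames, int length) {
        for (int i = 0; i < length; i++) {
            buffer[head] = frames[i];
            head = (head + 1) % buffer.length;
            if (count == buffer.length) {
                tail = (tail + 1) % buffer.length; // drop oldest frame
            } else {
                count++;
            }
        }
    }

    /** Reads up to maxFrames into out; returns the number actually read. */
    synchronized int read(float[] out, int maxFrames) {
        int n = Math.min(maxFrames, count);
        for (int i = 0; i < n; i++) {
            out[i] = buffer[tail];
            tail = (tail + 1) % buffer.length;
        }
        count -= n;
        return n;
    }

    synchronized int availableFrames() {
        return count;
    }
}
```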
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
🧹 Nitpick comments (4)
android/src/main/java/com/oney/WebRTCModule/GetUserMediaImpl.java (1)
42-42: Visibility change exposes MediaProjection permission data to external packages.

Making `GetUserMediaImpl` public and adding the `getMediaProjectionPermissionResultData()` getter enables external code to access the MediaProjection permission Intent. This is required for the audio capture feature, but consider:
- Documenting this class/method as internal API to discourage misuse
- The returned Intent could be used to create additional MediaProjections without user consent (though Android should enforce single-use)
Also applies to: 65-72
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@android/src/main/java/com/oney/WebRTCModule/GetUserMediaImpl.java` at line 42, The class GetUserMediaImpl and its getter getMediaProjectionPermissionResultData() expose the MediaProjection permission Intent publicly; restrict access by making the class and method package-private (remove public) or annotate them as internal (e.g., `@RestrictTo`(RestrictTo.Scope.LIBRARY)) and add a Javadoc comment marking them as internal API; additionally, avoid returning the raw Intent instance - document that callers must not reuse it and consider returning a safe copy or nulling/clearing the stored Intent after it has been consumed in GetUserMediaImpl to prevent external code from creating additional MediaProjections without consent.

ios/RCTWebRTC/InAppScreenCapturer.m (1)
125-135: Observer removal may execute on a different thread.

Observers are added on the main queue but `unregisterAppStateObservers` may be called from any thread. While `NSNotificationCenter` is thread-safe, there's a potential issue: if removal happens before the async `registerAppStateObservers` dispatch executes, the flag will be cleared but observers will still be added afterward.

Consider dispatching removal to the main queue as well, or using the fix suggested for `registerAppStateObservers`, which handles this case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/RCTWebRTC/InAppScreenCapturer.m` around lines 125 - 135, unregisterAppStateObservers can be called off-main and races with the async registerAppStateObservers dispatch; ensure removal runs on the main queue and mirrors the register guard: dispatch the removal block to the main queue, check and set the _observingAppState flag inside that main-queue block, and then call [[NSNotificationCenter defaultCenter] removeObserver:... ] for UIApplicationDidBecomeActiveNotification and UIApplicationWillResignActiveNotification; alternatively, adopt the same post-dispatch re-check pattern used in registerAppStateObservers so observers aren't added after you clear _observingAppState.

ios/RCTWebRTC/InAppScreenCaptureController.m (1)
33-39: Minor: Redundant deviceId fallback.

The `deviceId` is set to `@"in-app-screen-capture"` in `initWithCapturer:` (Line 16), so the nil-coalescing operator on Line 35 is unnecessary since `self.deviceId` will never be nil after initialization.

♻️ Suggested simplification

```diff
 - (NSDictionary *)getSettings {
     return @{
-        @"deviceId": self.deviceId ?: @"in-app-screen-capture",
+        @"deviceId": self.deviceId,
         @"groupId": @"",
         @"frameRate": @(30)
     };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/RCTWebRTC/InAppScreenCaptureController.m` around lines 33 - 39, The getSettings method uses a redundant nil-coalescing fallback for deviceId even though initWithCapturer: already sets self.deviceId to @"in-app-screen-capture"; update -[InAppScreenCaptureController getSettings] to use self.deviceId directly (remove the ?: @"in-app-screen-capture" fallback) so the dictionary simply assigns @"deviceId": self.deviceId, keeping the rest of the keys unchanged.

ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioMixer.swift (1)
108-137: Consider extracting format comparison to avoid duplication.

Lines 119-122 duplicate the format comparison logic from `ScreenShareAudioConverter.formatsMatch()`. Consider exposing that method or using `convertIfRequired` directly (it already has the identity optimization).

Simplified approach

```diff
-        // 3. Convert to graph format (e.g. 48 kHz / 1 ch / float32)
-        let buffer: AVAudioPCMBuffer
-        if pcm.format.sampleRate != targetFormat.sampleRate
-            || pcm.format.channelCount != targetFormat.channelCount
-            || pcm.format.commonFormat != targetFormat.commonFormat
-            || pcm.format.isInterleaved != targetFormat.isInterleaved {
-            guard let converted = audioConverter.convertIfRequired(pcm, to: targetFormat) else { return }
-            buffer = converted
-        } else {
-            buffer = pcm
-        }
+        // 3. Convert to graph format if needed (converter handles identity case)
+        guard let buffer = audioConverter.convertIfRequired(pcm, to: targetFormat) else { return }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioMixer.swift` around lines 108 - 137, The format comparison in enqueue(_:) duplicates logic in ScreenShareAudioConverter.formatsMatch(); change enqueue to avoid manual field-by-field checks by either calling ScreenShareAudioConverter.formatsMatch(pcm.format, targetFormat) or simply always calling audioConverter.convertIfRequired(pcm, to: targetFormat) and using the returned buffer (it already returns the original when no conversion is needed), then proceed to schedule and play the buffer; update references in enqueue to use audioConverter.pcmBuffer(from:), audioConverter.convertIfRequired(...), and playerNode.scheduleBuffer(...) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@android/src/main/java/com/oney/WebRTCModule/GetUserMediaImpl.java`:
- Around line 367-370: Retainment of mediaProjectionPermissionResultData can
leak a stale Intent; update GetUserMediaImpl to clear
mediaProjectionPermissionResultData when the MediaProjection is revoked or when
ScreenCapturerAndroid.onStop() is called and also null out displayMediaPromise
there; implement this by hooking into the MediaProjection callback/stop path
(where the MediaProjection is released) and calling
mediaProjectionPermissionResultData = null (and displayMediaPromise = null if
applicable), and add a freshness check inside
getMediaProjectionPermissionResultData() to validate the Intent (e.g., ensure a
live MediaProjection or timestamp/flag) and return null or throw if it is stale
so external callers cannot reuse an invalid Intent.
In `@android/src/main/java/com/oney/WebRTCModule/WebRTCModule.java`:
- Around line 166-185: The mixing loop can write past the mic buffer and perform
pointless writes; change the logic so it never writes beyond micShorts' valid
range by bounding the loop to the mic buffer capacity: compute micSamples =
bytesRead / 2 and screenSamples = screenShorts.remaining(), then iterate i from
0 to Math.min(micShorts.remaining(), Math.max(micSamples, screenSamples)) (not
totalSamples); for each i, when i >= micSamples copy screenShorts.get(i) into
micShorts, when i >= screenSamples keep the mic sample as-is, and otherwise add
micShorts.get(i) + screenShorts.get(i), clamp to Short range, and write with
micShorts.put(i). Ensure no breaks skip processing and avoid writing when
micShorts has no capacity.
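One consistent reading of the bounded, clamped mix described above, sketched with plain arrays standing in for the `ShortBuffer` views (illustrative only, not the module's actual code):

```java
// Sketch of the bounded additive mix: never write past the mic buffer,
// copy screen samples where the mic has none, keep mic samples where the
// screen is exhausted, and clamp sums to the 16-bit range.
class PcmMixer {
    /**
     * Mixes screen-share PCM into the mic buffer in place.
     * micValid/screenValid are the counts of valid samples in each source.
     */
    static void mixInto(short[] mic, int micValid,
                        short[] screen, int screenValid) {
        int total = Math.min(mic.length, Math.max(micValid, screenValid));
        for (int i = 0; i < total; i++) {
            int mixed;
            if (i >= micValid) {
                mixed = screen[i];            // screen-only region: copy
            } else if (i >= screenValid) {
                mixed = mic[i];               // mic-only region: keep
            } else {
                mixed = mic[i] + screen[i];   // both present: additive mix
            }
            // Clamp to Short range before writing back.
            mic[i] = (short) Math.max(Short.MIN_VALUE,
                                      Math.min(Short.MAX_VALUE, mixed));
        }
    }
}
```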
In `@ios/RCTWebRTC/InAppScreenCapturer.m`:
- Around line 109-123: The race occurs because _observingAppState is set to YES
before observers are actually added on the main queue in
registerAppStateObservers, allowing unregisterAppStateObservers to run and skip
removal; fix by moving the state change and the addObserver calls together on
the main queue: in registerAppStateObservers dispatch to the main queue and
inside that block check _observingAppState, set _observingAppState = YES, then
call addObserver for appDidBecomeActive and appWillResignActive (or
alternatively use dispatch_sync to ensure immediate registration), and ensure
unregisterAppStateObservers likewise removes observers on the main queue and
sets _observingAppState = NO to keep the flag consistent with actual observer
registration.
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioConverter.swift`:
- Around line 90-95: In ScreenShareAudioConverter.swift the int16 branch that
checks pcmBuffer.int16ChannelData always performs a single interleaved memcpy
which breaks non-interleaved input when avFormat.isInterleaved is false; update
the int16 path to mirror the float handling: when avFormat.isInterleaved is true
do the single memcpy using bytesPerFrame and frameCount, otherwise loop
per-channel (using avFormat.formatDescription.mChannelsPerFrame / channelCount),
compute per-channel byte stride (bytesPerFrame / channelCount), and memcpy each
pcmBuffer.int16ChannelData[channel] from the proper offset in dataPointer for
frameCount bytes (respecting totalLength), similar to how the float
non-interleaved case is handled.
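The per-channel copy that prompt asks for amounts to a standard deinterleave; a minimal sketch of the layout transformation (Java stands in for the memcpy-based Swift path, and the names here are illustrative):

```java
// Sketch: split interleaved 16-bit PCM (L R L R ...) into per-channel
// planes, the layout a non-interleaved AVAudioPCMBuffer expects.
class Deinterleaver {
    static short[][] deinterleave(short[] interleaved, int channels) {
        int frames = interleaved.length / channels;
        short[][] planes = new short[channels][frames];
        for (int frame = 0; frame < frames; frame++) {
            for (int ch = 0; ch < channels; ch++) {
                // Sample for channel ch of this frame sits at a fixed
                // stride inside the interleaved stream.
                planes[ch][frame] = interleaved[frame * channels + ch];
            }
        }
        return planes;
    }
}
```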
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioMixer.swift`:
- Around line 85-93: startMixing() currently only sets isMixing and does not
perform the ADM reconfiguration described in the comment; update startMixing()
to, after setting isMixing = true, either (A) request an ADM reconfigure so
onConfigureInputFromSource runs again (call the ADM reconfiguration/reset
method) or (B) immediately wire the mixer nodes using the ADM's cached context:
grab AudioDeviceModule.lastInputSource, lastInputDestination and lastInputFormat
and run the same node-attachment/configuration logic used in
onConfigureInputFromSource to attach your mixer nodes; reference startMixing(),
onConfigureInputFromSource, and
AudioDeviceModule.lastInputSource/lastInputDestination/lastInputFormat when
making the change.
In `@ios/RCTWebRTC/WebRTCModule+RTCMediaStream.m`:
- Around line 217-234: The screen-share audio path is not wired: when
options.useInAppScreenCapture is true you must set
InAppScreenCapturer.audioBufferHandler and, if options.includeScreenShareAudio
is enabled, create/retain a ScreenShareAudioMixer and call [mixer startMixing]
before starting capture; attach the handler to forward audio into the mixer and
store the mixer reference (e.g., on the module or options) so you can call
[mixer stopMixing] from mediaStreamTrackRelease; update the block that creates
InAppScreenCapturer (and options.activeInAppScreenCapturer) to set
audioBufferHandler, check options.includeScreenShareAudio, instantiate and start
the mixer accordingly, and ensure mediaStreamTrackRelease stops and releases the
mixer.
---
Nitpick comments:
In `@android/src/main/java/com/oney/WebRTCModule/GetUserMediaImpl.java`:
- Line 42: The class GetUserMediaImpl and its getter
getMediaProjectionPermissionResultData() expose the MediaProjection permission
Intent publicly; restrict access by making the class and method package-private
(remove public) or annotate them as internal (e.g.,
`@RestrictTo`(RestrictTo.Scope.LIBRARY)) and add a Javadoc comment marking them as
internal API; additionally, avoid returning the raw Intent instance - document
that callers must not reuse it and consider returning a safe copy or
nulling/clearing the stored Intent after it has been consumed in
GetUserMediaImpl to prevent external code from creating additional
MediaProjections without consent.
In `@ios/RCTWebRTC/InAppScreenCaptureController.m`:
- Around line 33-39: The getSettings method uses a redundant nil-coalescing
fallback for deviceId even though initWithCapturer: already sets self.deviceId
to @"in-app-screen-capture"; update -[InAppScreenCaptureController getSettings]
to use self.deviceId directly (remove the ?: @"in-app-screen-capture" fallback)
so the dictionary simply assigns @"deviceId": self.deviceId, keeping the rest of
the keys unchanged.
In `@ios/RCTWebRTC/InAppScreenCapturer.m`:
- Around line 125-135: unregisterAppStateObservers can be called off-main and
races with the async registerAppStateObservers dispatch; ensure removal runs on
the main queue and mirrors the register guard: dispatch the removal block to the
main queue, check and set the _observingAppState flag inside that main-queue
block, and then call [[NSNotificationCenter defaultCenter] removeObserver:... ]
for UIApplicationDidBecomeActiveNotification and
UIApplicationWillResignActiveNotification; alternatively, adopt the same
post-dispatch re-check pattern used in registerAppStateObservers so observers
aren’t added after you clear _observingAppState.
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioMixer.swift`:
- Around line 108-137: The format comparison in enqueue(_:) duplicates logic in
ScreenShareAudioConverter.formatsMatch(); change enqueue to avoid manual
field-by-field checks by either calling
ScreenShareAudioConverter.formatsMatch(pcm.format, targetFormat) or simply
always calling audioConverter.convertIfRequired(pcm, to: targetFormat) and using
the returned buffer (it already returns the original when no conversion is
needed), then proceed to schedule and play the buffer; update references in
enqueue to use audioConverter.pcmBuffer(from:),
audioConverter.convertIfRequired(...), and playerNode.scheduleBuffer(...)
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1c683d9e-b75e-43f1-baac-1cd400cea64f
📒 Files selected for processing (14)
- android/src/main/java/com/oney/WebRTCModule/GetUserMediaImpl.java
- android/src/main/java/com/oney/WebRTCModule/WebRTCModule.java
- android/src/main/java/com/oney/WebRTCModule/WebRTCModuleOptions.java
- ios/RCTWebRTC/InAppScreenCaptureController.h
- ios/RCTWebRTC/InAppScreenCaptureController.m
- ios/RCTWebRTC/InAppScreenCapturer.h
- ios/RCTWebRTC/InAppScreenCapturer.m
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioGraphConfigurationDelegate.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioConverter.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShareAudioMixer.swift
- ios/RCTWebRTC/WebRTCModule+RTCMediaStream.m
- ios/RCTWebRTC/WebRTCModule.m
- ios/RCTWebRTC/WebRTCModuleOptions.h
Actionable comments posted: 1
🧹 Nitpick comments (1)
ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift (1)
182-191: Potential retain cycle with strong `audioGraphDelegate` reference.

The `audioGraphDelegate` property is a strong reference. If the conforming delegate (e.g., `ScreenShareAudioMixer`) holds a strong reference back to this `AudioDeviceModule`, a retain cycle will occur. Consider making this a weak reference:

```diff
-    @objc public var audioGraphDelegate: AudioGraphConfigurationDelegate?
+    @objc public weak var audioGraphDelegate: AudioGraphConfigurationDelegate?
```

The cached input context using weak references for `lastInputSource` and `lastInputDestination` is appropriate since the `AVAudioEngine` owns these nodes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift` around lines 182 - 191, The audioGraphDelegate property is a strong reference and can cause a retain cycle if the delegate (e.g., ScreenShareAudioMixer) holds a reference back to AudioDeviceModule; change the property audioGraphDelegate to a weak reference and ensure the AudioGraphConfigurationDelegate protocol is class-bound (e.g., inherits from AnyObject) so it can be weak; update the declaration of audioGraphDelegate (and any related `@objc` exposure) to use weak and adjust any callers/implementations if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift`:
- Around line 282-289: The setVoiceProcessingBypassed(_:) method updates
source.isVoiceProcessingBypassed but fails to update
isVoiceProcessingBypassedSubject, causing observers of
isVoiceProcessingBypassedPublisher to see stale state; modify
setVoiceProcessingBypassed(_ isBypassed: Bool) to also immediately send the new
value to isVoiceProcessingBypassedSubject (matching the pattern used in
setMuted() and setRecording()), while keeping the existing
audioDeviceModule(_:didUpdateAudioProcessingState:) callback for eventual
consistency.
---
Nitpick comments:
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift`:
- Around line 182-191: The audioGraphDelegate property is a strong reference and
can cause a retain cycle if the delegate (e.g., ScreenShareAudioMixer) holds a
reference back to AudioDeviceModule; change the property audioGraphDelegate to a
weak reference and ensure the AudioGraphConfigurationDelegate protocol is
class-bound (e.g., inherits from AnyObject) so it can be weak; update the
declaration of audioGraphDelegate (and any related `@objc` exposure) to use weak
and adjust any callers/implementations if needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4d0b71e8-ef28-4427-821a-cdd08403ae49
📒 Files selected for processing (3)
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift
- ios/RCTWebRTC/WebRTCModule.m
- ios/RCTWebRTC/WebRTCModuleOptions.h
✅ Files skipped from review due to trivial changes (1)
- ios/RCTWebRTC/WebRTCModule.m
🚧 Files skipped from review as they are similar to previous changes (1)
- ios/RCTWebRTC/WebRTCModuleOptions.h
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift`:
- Around line 182-185: The screenShareAudioMixer property is never assigned to
the audio processing module so its RTCAudioCustomProcessingDelegate callbacks
never run; update the code that creates/initializes the
RTCDefaultAudioProcessingModule (referencing WebRTCModule.m where
capturePostProcessingDelegate is currently nil) to set its
capturePostProcessingDelegate to AudioDeviceModule.screenShareAudioMixer (the
ScreenShareAudioMixer instance) when the module is created or when screen
sharing starts, and ensure you clear/unset it when screen sharing stops to avoid
dangling delegates; locate references to RTCDefaultAudioProcessingModule,
capturePostProcessingDelegate, AudioDeviceModule.screenShareAudioMixer and
ScreenShareAudioMixer to make the assignment and lifecycle cleanup.
In
`@ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/ScreenShareAudioMixer.swift`:
- Around line 27-31: The current implementation treats audio as mono by only
enqueuing channelData[0] into ringBuffer and reusing that single stream in
mixFromRingBuffer(...), which collapses stereo/right-only content; update the
data path so the ringBuffer and read/mix logic handle frames * channels when
targetFormat indicates stereo (or preserve original channel count): modify
enqueue(_:) in ScreenShareAudioMixer to push interleaved (or per-channel)
samples for all channels rather than only channelData[0], extend AudioRingBuffer
capacity/semantics to store frames*channels, and update mixFromRingBuffer(...)
to read and distribute samples into each output channel (or perform an explicit
mono downmix before enqueue if mono is desired) and ensure silence detection
operates on the chosen mono mix instead of the left channel only.
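If the explicit mono downmix before enqueue is the chosen route, it reduces to averaging each interleaved stereo frame; a minimal sketch (illustrative, not the mixer's actual code):

```java
// Sketch: downmix interleaved stereo float PCM (L R L R ...) to mono by
// averaging each frame's two channels, so right-only content survives.
class Downmixer {
    static float[] stereoToMono(float[] interleaved) {
        float[] mono = new float[interleaved.length / 2];
        for (int i = 0; i < mono.length; i++) {
            mono[i] = 0.5f * (interleaved[2 * i] + interleaved[2 * i + 1]);
        }
        return mono;
    }
}
```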
- Around line 31-38: The mixer mutates and reads shared state (isMixing,
targetFormat, processingSampleRate, processingChannels and the
ScreenShareAudioConverter cache) from multiple threads; add a single
synchronization primitive (e.g. a private serial DispatchQueue or an NSLock)
inside ScreenShareAudioMixer and guard all reads/writes to those properties and
the converter with it, update methods that change state (start/stop,
route-change handlers, audioProcessingInitialize) to perform mutations on that
queue, and in enqueue(_:) snapshot the needed state (isMixing, a local copy of
targetFormat and a reference to the converter) while synchronized before
performing any conversion/mixing off-queue so conversion never races with reset;
ensure converter reset/replace also happens under the same lock/queue.
- Around line 40-43: Change the floatS16Scale constant from 32768.0 to 32767.0
and ensure the mixing/conversion path in ScreenShareAudioMixer clamps the final
summed/scaled sample to the Int16 range [-32768, 32767] (i.e., after adding
inputs and multiplying by floatS16Scale but before casting to Int16). Update the
constant floatS16Scale and modify the mixer/conversion routine that converts
normalized Float32 to FloatS16 (the mixing function in ScreenShareAudioMixer
where samples are summed and scaled) to perform clamping of the final value to
avoid producing +32768 or other out-of-range values.
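The scale-and-clamp order that comment prescribes — sum the inputs, multiply by the scale, then clamp before narrowing to Int16 — can be sketched as follows (constant name mirrors the comment; Java is used for illustration only):

```java
// Sketch: convert a normalized Float32 sample (post-mix) to the FloatS16
// domain. Scale by 32767 (not 32768, so +1.0 maps in range) and clamp the
// final value to [-32768, 32767] before any narrowing cast.
class FloatS16 {
    static final float floatS16Scale = 32767.0f;

    static float toFloatS16(float normalized) {
        float scaled = normalized * floatS16Scale;
        if (scaled > 32767.0f) scaled = 32767.0f;
        if (scaled < -32768.0f) scaled = -32768.0f;
        return scaled;
    }
}
```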
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7f37c1f3-8e47-4754-9e15-ed39e8fcc07c
📒 Files selected for processing (5)
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/AudioRingBuffer.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/ScreenShareAudioConverter.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/ScreenShareAudioMixer.swift
- ios/RCTWebRTC/WebRTCModule.m
✅ Files skipped from review due to trivial changes (1)
- ios/RCTWebRTC/WebRTCModule.m
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ios/RCTWebRTC/InAppScreenCapturer.m`:
- Around line 139-156: The rapid FG/BG race can call startRPScreenRecorder while
stopCaptureWithHandler is still in-flight; add a boolean like _stopPending that
you set to YES in appWillResignActive before calling [[RPScreenRecorder
sharedRecorder] stopCaptureWithHandler:], clear it inside that stop completion
handler, and only call startRPScreenRecorder when _stopPending is NO (i.e. in
appDidBecomeActive check _stopPending before restarting and in the stop
completion handler if _shouldResumeOnForeground && _capturing && !_stopPending
then call startRPScreenRecorder); ensure you still clear
_shouldResumeOnForeground as appropriate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9df2d525-162d-4517-887b-5f866c4cd26e
📒 Files selected for processing (3)
- ios/RCTWebRTC/InAppScreenCapturer.m
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/ScreenShareAudioMixer.swift
🚧 Files skipped from review as they are similar to previous changes (2)
- ios/RCTWebRTC/Utils/AudioDeviceModule/AudioDeviceModule.swift
- ios/RCTWebRTC/Utils/AudioDeviceModule/ScreenShare/ScreenShareAudioMixer.swift
Summary by CodeRabbit
New Features
Bug Fixes / Reliability